    Fourier Neural Operator with Learned Deformations for PDEs on General Geometries

    Deep learning surrogate models have shown promise in solving partial differential equations (PDEs). Among them, the Fourier neural operator (FNO) achieves good accuracy and is significantly faster than numerical solvers on a variety of PDEs, such as fluid flows. However, the FNO uses the fast Fourier transform (FFT), which is limited to rectangular domains with uniform grids. In this work, we propose a new framework, viz., geo-FNO, to solve PDEs on arbitrary geometries. Geo-FNO learns to deform the input (physical) domain, which may be irregular, into a latent space with a uniform grid, and the FNO model with the FFT is applied in the latent space. The resulting geo-FNO model has both the computational efficiency of the FFT and the flexibility to handle arbitrary geometries. Geo-FNO is also flexible in its input formats: point clouds, meshes, and design parameters are all valid inputs. We consider a variety of PDEs, such as the elasticity, plasticity, Euler, and Navier-Stokes equations, and both forward modeling and inverse design problems. Geo-FNO is 10^5 times faster than standard numerical solvers and twice as accurate as direct interpolation with existing ML-based PDE solvers such as the standard FNO.
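
    A minimal sketch of the deformation idea described above, assuming PyTorch: a small coordinate network learns a map from the irregular physical domain into the unit square, where an FFT-based FNO can operate. The class name DeformNet, the resampling step, and all dimensions are illustrative assumptions, not the paper's code.

```python
import torch


class DeformNet(torch.nn.Module):
    """Maps physical coordinates (x, y) into latent coordinates in [0, 1]^2."""

    def __init__(self, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, hidden), torch.nn.GELU(),
            torch.nn.Linear(hidden, hidden), torch.nn.GELU(),
            torch.nn.Linear(hidden, 2), torch.nn.Sigmoid(),  # land in the unit square
        )

    def forward(self, coords):  # coords: (n_points, 2) mesh or point-cloud nodes
        return self.net(coords)


# Usage sketch: deform an irregular point cloud into the latent unit square,
# where the field would be resampled on a uniform grid, passed through a
# standard FFT-based FNO, and pulled back through the deformation.
phi = DeformNet()
physical_coords = torch.rand(500, 2)   # stand-in for an irregular point cloud
latent_coords = phi(physical_coords)   # regular latent domain for the FFT
```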

    Learning macroscopic internal variables and history dependence from microscopic models

    This paper concerns the study of history-dependent phenomena in heterogeneous materials in a two-scale setting, where the material is specified at a fine microscopic scale of heterogeneities that is much smaller than the coarse macroscopic scale of application. We specifically study a polycrystalline medium where each grain is governed by crystal plasticity while the solid is subjected to macroscopic dynamic loads. The theory of homogenization allows us to solve the macroscale problem directly with a constitutive relation that is defined implicitly by the solution of the microscale problem. However, homogenization leads to a highly complex history dependence at the macroscale, one that can be quite different from that at the microscale. In this paper, we examine the use of machine learning, and especially deep neural networks, to harness data generated by repeatedly solving the finer-scale model to: (i) gain insight into the history dependence and the macroscopic internal variables that govern the overall response; and (ii) create a computationally efficient surrogate of its solution operator that can be used directly at the coarser scale with no further modeling. We do so by introducing a recurrent neural operator (RNO), and show that: (i) the architecture and the learned internal variables can provide insight into the physics of the macroscopic problem; and (ii) the RNO can provide multiscale, specifically FE^2, accuracy at a cost comparable to a conventional empirical constitutive relation.
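
    A minimal sketch of a recurrent neural operator cell under stated assumptions (PyTorch; scalar strain; the names RNOCell, stress_net, update_net and the internal-variable dimension are all illustrative): a hidden state of learned internal variables is advanced at each load step, giving the history dependence the abstract describes.

```python
import torch


class RNOCell(torch.nn.Module):
    def __init__(self, n_internal=8, hidden=64):
        super().__init__()
        # (strain, internal variables) -> stress
        self.stress_net = torch.nn.Sequential(
            torch.nn.Linear(1 + n_internal, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1),
        )
        # (strain, internal variables) -> rate of change of internal variables
        self.update_net = torch.nn.Sequential(
            torch.nn.Linear(1 + n_internal, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, n_internal),
        )

    def forward(self, strain, xi, dt):
        inp = torch.cat([strain, xi], dim=-1)
        stress = self.stress_net(inp)
        # explicit Euler update, scaled by dt so the recurrence behaves
        # consistently across different time-step sizes
        xi_next = xi + dt * self.update_net(inp)
        return stress, xi_next
```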

    Multipole Graph Neural Operator for Parametric Partial Differential Equations

    One of the main challenges in using deep learning-based methods for simulating physical systems and solving partial differential equations (PDEs) is formulating physics-based data in the structure required by neural networks. Graph neural networks (GNNs) have gained popularity in this area, since graphs offer a natural way of modeling particle interactions and a clear way of discretizing continuum models. However, the graphs constructed for such tasks usually ignore long-range interactions, because the computational complexity scales unfavorably with the number of nodes. The errors due to these approximations scale with the discretization of the system, preventing generalization under mesh refinement. Inspired by classical multipole methods, we propose a novel multi-level graph neural network framework that captures interactions at all ranges with only linear complexity. Our multi-level formulation is equivalent to recursively adding inducing points to the kernel matrix, unifying GNNs with multi-resolution matrix factorization of the kernel. Experiments confirm that our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
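
    A minimal numerical sketch of the inducing-point decomposition the abstract refers to, making no assumption about the paper's implementation: routing long-range interactions through m << n inducing points replaces the dense n x n kernel with three thin factors, so one application costs O(nm) rather than O(n^2). The random matrices below are placeholders for learned kernels.

```python
import torch

n, m, d = 4096, 64, 16             # fine nodes, inducing points, feature width
features = torch.randn(n, d)

K_down = torch.randn(m, n) / n     # restriction: fine nodes -> inducing points
K_coarse = torch.randn(m, m) / m   # dense interactions among inducing points
K_up = torch.randn(n, m) / m       # prolongation: inducing points -> fine nodes

# Long-range term in O(n*m): the n x n kernel is never materialized.
long_range = K_up @ (K_coarse @ (K_down @ features))

# In the full framework a sparse local graph kernel handles short ranges;
# here the untouched features stand in for that term.
output = features + long_range
```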

    Markov Neural Operators for Learning Chaotic Systems

    Chaotic systems are notoriously challenging to predict because of their instability: small errors accumulate at every simulated time step, resulting in completely different trajectories. However, the trajectories of many prominent chaotic systems live in a low-dimensional subspace (attractor). If the system is Markovian, the attractor is uniquely determined by the Markov operator that maps the evolution over infinitesimal time steps. This makes it possible to predict the behavior of a chaotic system by learning the Markov operator, even if we cannot predict the exact trajectory. Recently, a new framework for learning resolution-invariant solution operators for PDEs was proposed, known as neural operators. In this work, we train a Markov neural operator (MNO) using only local one-step evolution information, and then compose the learned operator to obtain the global attractor and invariant measure. Such a Markov neural operator forms a discrete semigroup, and we empirically observe that it does not collapse or blow up. Experiments show neural operators are more accurate and stable than previous methods on chaotic systems such as the Kuramoto-Sivashinsky and Navier-Stokes equations.
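
    A minimal sketch of the composition step, assuming a one-step operator (e.g. an FNO) has already been trained on pairs (u_t, u_{t+dt}); the function name rollout and the closing diagnostic note are illustrative. The point is that distributional statistics of the rollout, not pointwise trajectories, are what should match the true system.

```python
import torch


def rollout(one_step_model, u0, n_steps):
    """Compose the learned Markov operator with itself n_steps times."""
    states = [u0]
    u = u0
    with torch.no_grad():
        for _ in range(n_steps):
            u = one_step_model(u)   # u_{t+dt} = G_theta(u_t)
            states.append(u)
    return torch.stack(states)      # long trajectory sampling the attractor


# To assess the invariant measure, compare statistics of the rollout
# (histograms, energy spectra) against held-out data rather than pointwise
# errors, which necessarily grow for chaotic dynamics.
```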

    Fourier Neural Operator for Parametric Partial Differential Equations

    The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces. Recently, this has been generalized to neural operators that learn mappings between function spaces. For partial differential equations (PDEs), neural operators directly learn the mapping from any functional parametric dependence to the solution; they thus learn an entire family of PDEs, in contrast to classical methods, which solve one instance of the equation. In this work, we formulate a new neural operator by parameterizing the integral kernel directly in Fourier space, allowing for an expressive and efficient architecture. We perform experiments on Burgers' equation, Darcy flow, and the Navier-Stokes equation (including the turbulent regime). Our Fourier neural operator shows state-of-the-art performance compared to existing neural network methodologies, and it is up to three orders of magnitude faster than traditional PDE solvers.
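
    A minimal sketch of the Fourier layer this abstract describes, in PyTorch: the integral kernel is parameterized directly in Fourier space as a learned complex weight on the lowest `modes` frequencies. The 1-D class below follows the widely used reference pattern; the name SpectralConv1d and the initialization are illustrative.

```python
import torch


class SpectralConv1d(torch.nn.Module):
    def __init__(self, in_channels, out_channels, modes):
        super().__init__()
        self.modes = modes  # number of low-frequency Fourier modes to keep
        scale = 1.0 / (in_channels * out_channels)
        self.weight = torch.nn.Parameter(
            scale * torch.rand(in_channels, out_channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x):  # x: (batch, in_channels, n_grid)
        x_ft = torch.fft.rfft(x)  # transform to Fourier space
        out_ft = torch.zeros(
            x.size(0), self.weight.size(1), x_ft.size(-1),
            dtype=torch.cfloat, device=x.device,
        )
        # multiply retained low modes by the learned kernel; truncate the rest
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weight
        )
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to physical space
```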

    Physics-Informed Neural Operator for Learning Partial Differential Equations

    Machine learning methods have recently shown promise in solving partial differential equations (PDEs). They can be classified into two broad categories: approximating the solution function and learning the solution operator. The Physics-Informed Neural Network (PINN) is an example of the former, while the Fourier neural operator (FNO) is an example of the latter. Both approaches have shortcomings. The optimization in PINN is challenging and prone to failure, especially on multi-scale dynamic systems. FNO does not suffer from this optimization issue, since it carries out supervised learning on a given dataset, but obtaining such data may be too expensive or infeasible. In this work, we propose the physics-informed neural operator (PINO), which combines the operator-learning and function-optimization frameworks. This integrated approach improves convergence rates and accuracy over both PINN and FNO models. In the operator-learning phase, PINO learns the solution operator over multiple instances of the parametric PDE family. In the test-time optimization phase, PINO optimizes the pre-trained operator ansatz for the queried instance of the PDE. Experiments show PINO outperforms previous ML methods on many popular PDE families while retaining the extraordinary speed-up of FNO over solvers. In particular, PINO accurately solves challenging long temporal transient flows and Kolmogorov flows where other baseline ML methods fail to converge.
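
    A minimal sketch of the two PINO phases under stated assumptions (PyTorch; pde_residual stands in for an equation-specific residual, e.g. computed by Fourier differentiation; all names are illustrative): training mixes a data loss with a physics residual on the model's own output, and a new instance is handled by fine-tuning the pre-trained operator on the physics term alone.

```python
import torch


def pino_loss(model, a, u_data, pde_residual, w_data=1.0, w_pde=1.0):
    """Operator-learning phase: supervised data loss plus physics residual."""
    u_pred = model(a)                                 # operator ansatz G_theta(a)
    loss_data = torch.mean((u_pred - u_data) ** 2)
    loss_pde = torch.mean(pde_residual(a, u_pred) ** 2)
    return w_data * loss_data + w_pde * loss_pde


def test_time_optimize(model, a_query, pde_residual, steps=100, lr=1e-4):
    """Test-time phase: no labels for a_query, so optimize the residual alone."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean(pde_residual(a_query, model(a_query)) ** 2)
        loss.backward()
        opt.step()
    return model
```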

    Strengthening magnesium by design: integrating alloying and dynamic processing

    Magnesium (Mg) has the lowest density of all structural metals and excellent potential for wide use in structural applications. While pure Mg has inferior mechanical properties, the addition of further elements at various concentrations has produced alloys with enhanced mechanical performance and corrosion resistance. An important consequence of adding such elements is that the saturated Mg matrix can locally decompose to form solute clusters and intermetallic particles, often referred to as precipitates. Controlling the shape, number density, volume fraction, and spatial distribution of solute clusters and precipitates significantly impacts the alloy's plastic response. Conversely, plastic deformation during thermomechanical processing can dramatically impact solute clustering and precipitation. In this paper, we first discuss how solute atoms, solute clusters, and precipitates can improve the mechanical properties of Mg alloys, primarily by comparing three alloy systems: Mg-Al, Mg-Zn, and Mg-Y-based alloys. In the second part, we provide strategies for optimizing such microstructures by controlling the nucleation and growth of solute clusters and precipitates during thermomechanical processing. In the third part, we briefly highlight how one can enable inverse design of Mg alloys through a more robust Integrated Computational Materials Design (ICMD) approach.